
    A Continuum Poisson-Boltzmann Model for Membrane Channel Proteins

    Membrane proteins constitute a large portion of the human proteome and perform a variety of important functions as membrane receptors, transport proteins, enzymes, signaling proteins, and more. Computational studies of membrane proteins are usually much more complicated than those of globular proteins. Here we propose a new continuum model for Poisson-Boltzmann calculations of membrane channel proteins. Major improvements over the existing continuum slab model are as follows: 1) the location and thickness of the slab are fine-tuned based on explicit-solvent MD simulations; 2) the highly different accessibility of the membrane and water regions is addressed with a two-step, two-probe grid-labeling procedure; and 3) water pores/channels are identified automatically. The new continuum membrane model is optimized (by adjusting the membrane probe, as well as the slab thickness and center) to best reproduce the distributions of buried water molecules in the membrane region as sampled in explicit-water simulations. Our optimization also shows that the widely adopted water probe of 1.4 Å for globular proteins is a very reasonable default value for membrane protein simulations: it gives an overall minimum number of inconsistencies between the continuum and explicit representations of water distributions in membrane channel proteins, at least in the water-accessible pore/channel regions on which we focus. Finally, we validate the new membrane model by carrying out binding affinity calculations for a potassium channel, and we observe good agreement with experimental results.
    Comment: 40 pages, 6 figures, 5 tables
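    The abstract does not restate the governing equation; for orientation, continuum models of this kind typically solve the linearized Poisson-Boltzmann equation with a piecewise dielectric, which in its standard textbook form (the region assignments below are illustrative, not taken from the paper) reads:

```latex
% Linearized Poisson-Boltzmann equation with a piecewise dielectric.
%   \varepsilon(\mathbf{r})      : dielectric coefficient, taking different
%                                  values in the protein interior, the
%                                  membrane slab, and the water region
%   \bar{\kappa}(\mathbf{r})     : ionic screening coefficient (zero where
%                                  mobile ions are excluded)
%   \rho_f(\mathbf{r})           : fixed (solute) charge density
\[
  \nabla \cdot \bigl[ \varepsilon(\mathbf{r})\, \nabla \phi(\mathbf{r}) \bigr]
  \;-\; \bar{\kappa}^{2}(\mathbf{r})\, \phi(\mathbf{r})
  \;=\; -4\pi \rho_f(\mathbf{r})
\]
```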

    LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop

    While there has been remarkable progress in the performance of visual recognition algorithms, the state-of-the-art models tend to be exceptionally data-hungry. Large labeled training datasets, expensive and tedious to produce, are required to optimize millions of parameters in deep network models. Lagging behind the growth in model capacity, the available datasets are quickly becoming outdated in terms of size and density. To circumvent this bottleneck, we propose to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop. Starting from a large set of candidate images for each category, we iteratively sample a subset, ask people to label them, classify the others with a trained model, split the set into positives, negatives, and unlabeled based on the classification confidence, and then iterate with the unlabeled set. To assess the effectiveness of this cascading procedure and enable further progress in visual recognition research, we construct a new image dataset, LSUN. It contains around one million labeled images for each of 10 scene categories and 20 object categories. We experiment with training popular convolutional networks and find that they achieve substantial performance gains when trained on this dataset.
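    A minimal sketch of the cascading procedure described above follows. The `request_human_labels` and `train_model` callables and the confidence thresholds are hypothetical placeholders standing in for a crowdsourcing interface and a classifier, not the authors' pipeline.

```python
from random import sample

def amplify_labels(candidates, request_human_labels, train_model,
                   batch_size=1000, pos_thresh=0.95, neg_thresh=0.05):
    """Human-in-the-loop labeling cascade (sketch).

    candidates           : iterable of images for one category
    request_human_labels : callable, batch -> {image: bool} (hypothetical)
    train_model          : callable, labels -> model with .predict_proba(img)
                           returning P(positive) (hypothetical)
    """
    labeled = {}                      # image -> bool (human- or model-assigned)
    unlabeled = set(candidates)
    while unlabeled:
        # 1) Sample a subset and have people label it.
        batch = sample(list(unlabeled), min(batch_size, len(unlabeled)))
        labeled.update(request_human_labels(batch))
        unlabeled -= set(batch)
        # 2) Train a model on all labels collected so far.
        model = train_model(labeled)
        # 3) Classify the rest; only ambiguous images go into the next round.
        remaining = set()
        for img in unlabeled:
            p = model.predict_proba(img)
            if p >= pos_thresh:
                labeled[img] = True       # confident positive
            elif p <= neg_thresh:
                labeled[img] = False      # confident negative
            else:
                remaining.add(img)        # unlabeled: iterate
        unlabeled = remaining
    return labeled
```

    Each round removes at least one batch of images via human labeling, so the loop always terminates; the model only has to resolve the shrinking ambiguous middle band between the two thresholds.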

    SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels

    Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu.
    Funding: American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship; United States Office of Naval Research, Multidisciplinary University Research Initiative (N000141010933).
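    The box constraint sketched above can be made concrete as follows; the notation is a plausible formalization of the abstract's description, not the paper's own.

```latex
% Bundle adjustment with object-to-object correspondences (sketch):
% standard reprojection error over camera poses (R_c, t_c) and 3D points
% X_p, with every point assigned to object o required to lie inside that
% object's fixed-size box of dimensions s_o, parameterized by rotation
% R_o and translation t_o.
\[
  \min_{\{R_c,\, t_c\},\; \{X_p\},\; \{R_o,\, t_o\}}
  \sum_{c,\,p} \bigl\| \pi\!\left( R_c X_p + t_c \right) - x_{cp} \bigr\|^{2}
  \quad \text{s.t.} \quad
  \bigl| R_o^{\top} \left( X_p - t_o \right) \bigr| \le \tfrac{1}{2}\, s_o
  \ \ \text{for every point } p \text{ on object } o,
\]
% where \pi is the camera projection, x_{cp} is the observed 2D location
% of point p in frame c, and |\cdot| and \le act componentwise.
```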

    3D ShapeNets: A Deep Representation for Volumetric Shapes

    3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
    Comment: to appear in CVPR 2015
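    For concreteness, here is a minimal sketch of the binary voxel-occupancy representation the abstract refers to, built from a point cloud; the 30^3 resolution and the normalization scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def voxelize(points, grid=30):
    """Turn an (N, 3) point cloud into a (grid, grid, grid) binary occupancy array."""
    pts = np.asarray(points, dtype=np.float64)
    # Normalize into the unit cube [0, 1), preserving aspect ratio.
    pts = pts - pts.min(axis=0)
    pts = pts / (pts.max() + 1e-9)
    # Map coordinates to voxel indices and mark occupied cells.
    idx = np.clip((pts * grid).astype(int), 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

# Example: a random point cloud becomes a binary occupancy volume, the kind of
# input a volumetric model like 3D ShapeNets consumes.
occupancy = voxelize(np.random.rand(1000, 3))
print(occupancy.shape, occupancy.sum())   # (30, 30, 30) and #occupied voxels
```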